
    A system for online compression of high-speed network measurements

    Measuring various metrics of high speed and high capacity networks produces a vast amount of information over a long period of time, making the conventional storage of the data practically inefficient. Such metrics are derived from packet level information and can be represented as time series signals. Thus, they can be analyzed using signal analysis techniques. This paper looks at the Wavelet transform as a method of analyzing and compressing measurement signals (such as delay, utilization, data rate etc.) produced from high-speed networks. A live system can calculate these measurements and then perform wavelet techniques to keep the significant information and discard the small variations. An investigation into the choice of an appropriate wavelet is presented along with results both from off-line and on-line experiments. The quality of the decompressed signal is measured by the PSNR and a comparison of compression performance is presented against the lossless tool bzip2.
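    The core idea described above (transform, keep significant coefficients, discard small variations, score quality with PSNR) can be sketched with a one-level Haar transform. This is an illustrative sketch, not the paper's implementation; the threshold value and the peak used in the PSNR are assumptions.

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform: pairwise averages and details.
    Assumes an even-length signal."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level."""
    signal = []
    for a, d in zip(approx, detail):
        signal.extend([a + d, a - d])
    return signal

def compress(signal, threshold):
    """Keep significant detail coefficients, zero the small variations."""
    approx, detail = haar_step(signal)
    kept = [d if abs(d) >= threshold else 0.0 for d in detail]
    return approx, kept

def psnr(original, reconstructed):
    """PSNR in dB; the peak here is the max magnitude of the original
    (an assumption -- other peak definitions are common)."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")
    peak = max(abs(x) for x in original)
    return 10 * math.log10(peak ** 2 / mse)

# A delay-like trace: small variations around two levels.
signal = [10.0, 10.1, 10.0, 9.9, 20.0, 20.2, 10.0, 10.1]
approx, detail = compress(signal, threshold=0.5)
reconstructed = haar_inverse(approx, detail)
```

    With this threshold all details are discarded, yet the reconstruction stays close to the original, which is the trade-off the abstract describes.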

    Compressing computer network measurements using embedded zerotree wavelets

    Monitoring and measuring various metrics of high data rate and high capacity networks produces a vast amount of information over a long period of time. Characteristics such as throughput and delay are derived from packet level information and can be represented as time series signals. This paper looks at the Embedded Zerotree Wavelet (EZW) algorithm, proposed by Shapiro, in order to compress computer network delay and throughput measurements while preserving the quality of interesting features and controlling the level of quality of the compressed signal. The quality characteristics that are examined are the preservation of the mean square error (MSE), the standard deviation, the general visual quality (the PSNR) and the scaling behaviour. Experimental results are obtained to evaluate the behaviour of the algorithm on delay and data rate signals. Finally, a comparison of compression performance is presented against the lossless tool bzip2.
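    Shapiro's EZW combines zerotree coding of insignificance across scales with successive-approximation quantization. The sketch below shows only the successive-approximation part on a flat coefficient list (no zerotree structure), which is what lets the coder control the quality level by stopping after a chosen number of passes; the coefficients and pass count are invented for illustration.

```python
import math

def successive_approximation(coeffs, passes):
    """Refine a reconstruction one bit-plane at a time, EZW-style.
    Returns the reconstruction after each pass, so quality is
    controlled by how many passes are kept."""
    threshold = 2.0 ** math.floor(math.log2(max(abs(c) for c in coeffs)))
    recon = [0.0] * len(coeffs)
    snapshots = []
    for _ in range(passes):
        for i, c in enumerate(coeffs):
            # A coefficient contributes once its residual magnitude
            # reaches the current threshold.
            residual = c - recon[i]
            if abs(residual) >= threshold:
                recon[i] += math.copysign(threshold, residual)
        snapshots.append(list(recon))
        threshold /= 2  # next pass encodes one finer bit-plane
    return snapshots

coeffs = [34.0, -21.0, 10.0, -7.0, 3.0, 1.5]
snapshots = successive_approximation(coeffs, passes=5)
```

    Each additional pass halves the threshold and strictly reduces the reconstruction error, which is the embedded (progressive) property the abstract relies on.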

    Using wavelets for compression and detecting events in anomalous network traffic

    Monitoring and measuring various metrics of high data rate networks produces a vast amount of information over a long period of time, making the storage of the monitored data a serious issue. Furthermore, for the collected monitoring data to be useful to network analysts, these measurements need to be processed in order to detect interesting characteristics. In this paper wavelet analysis is used as a multi-resolution analysis tool for compression of data rate measurements. Two known thresholds are suggested for lossy compression and event detection purposes. Results show that high compression ratios are achievable while preserving the quality (quantitative and visual aspects) and the energy of the signal, and that sudden changes can be detected.
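    One well-known threshold that fits this dual purpose is the universal (VisuShrink) threshold, sigma * sqrt(2 ln n), with the noise scale sigma estimated robustly from the detail coefficients. Whether this is one of the paper's two thresholds is an assumption; the event rule (details exceeding a multiple of the threshold) is likewise illustrative.

```python
import math

def haar_details(signal):
    """Detail coefficients of one Haar level (pairwise half-differences)."""
    return [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

def universal_threshold(details):
    """Universal threshold sigma * sqrt(2 ln n), with sigma estimated
    from the median absolute deviation of the details."""
    mad = sorted(abs(d) for d in details)[len(details) // 2]
    sigma = mad / 0.6745
    return sigma * math.sqrt(2 * math.log(len(details)))

# Data-rate-like trace with one sudden spike.
signal = [5.0, 5.1, 5.0, 4.9, 5.1, 5.0, 9.0, 5.0, 5.1, 5.0]
details = haar_details(signal)
t = universal_threshold(details)
compressed = [d if abs(d) > t else 0.0 for d in details]       # lossy compression
events = [i for i, d in enumerate(details) if abs(d) > 3 * t]  # sudden changes
```

    The same thresholded coefficients serve both goals: most details are zeroed (compression), while the one large detail both survives compression and is flagged as an event.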

    Wavelet compression techniques for computer network measurements

    The Wavelet transform is a recent signal analysis tool that has already been successfully used in image, video and speech compression applications. This paper looks at the Wavelet transform as a method of compressing computer network measurements produced from high-speed networks. Such networks produce a large amount of information over a long period of time, requiring compression for archiving. An important aspect of the compression is to maintain the quality of important features of the signals. In this paper two known wavelet coefficient threshold selection techniques are examined and utilized separately, along with an efficient method for storing wavelet coefficients. Experimental results are obtained to compare the behaviour of the two threshold selection schemes on delay and data rate signals, using the mean square error (MSE) statistic, the PSNR and the file size of the compressed output.
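    Two widely used coefficient thresholding rules are hard and soft thresholding; whether these are exactly the two schemes the paper compares is an assumption, but the sketch below shows how the rules differ on the same coefficients.

```python
def hard_threshold(coeffs, t):
    """Hard rule: keep a coefficient unchanged if |c| > t, else zero it."""
    return [c if abs(c) > t else 0.0 for c in coeffs]

def soft_threshold(coeffs, t):
    """Soft rule: additionally shrink every surviving coefficient toward
    zero by t, which smooths the reconstruction."""
    return [(abs(c) - t) * (1 if c > 0 else -1) if abs(c) > t else 0.0
            for c in coeffs]

coeffs = [4.0, -0.3, 1.5, -2.0, 0.1]
hard = hard_threshold(coeffs, 1.0)
soft = soft_threshold(coeffs, 1.0)
```

    Both rules zero the same coefficients (the compression gain is identical), but they yield different MSE/PSNR for the reconstruction, which is why the two schemes are compared per signal.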

    A live system for wavelet compression of high speed computer network measurements

    Monitoring high-speed networks for a long period of time produces a high volume of data, making the storage of this information practically inefficient. To this end, there is a need for an efficient method of data analysis and reduction in order to archive and store the enormous amount of monitored traffic. Satisfying this need is useful not only for administrators but also for researchers who run their experiments on the monitored network. The researchers would like to know how their experiments affect the network's behaviour in terms of utilization, delay, packet loss, data rate etc. In this paper a method of compressing computer network measurements while preserving the quality of interesting signal characteristics is presented. Eight different mother wavelets are compared against each other in order to examine which one offers the best results in terms of quality of the reconstructed signal. The proposed wavelet compression algorithm is compared against the lossless compression tool bzip2 in terms of compression ratio (C.R.). Finally, practical results are presented by compressing sampled traffic recorded from a live network.
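    The bzip2 baseline in such a comparison can be reproduced in outline with Python's standard bz2 module; the serialization of the measurement trace (plain text, one sample per line) is an assumption, not the paper's actual format.

```python
import bz2

def compression_ratio(original: bytes, compressed: bytes) -> float:
    """C.R. = original size / compressed size (larger is better)."""
    return len(original) / len(compressed)

# A synthetic measurement trace serialized as text (assumed format).
samples = [10.0 + 0.01 * (i % 7) for i in range(1000)]
raw = "\n".join(f"{s:.3f}" for s in samples).encode()

bz2_out = bz2.compress(raw)      # the lossless baseline
cr = compression_ratio(raw, bz2_out)
```

    A lossy wavelet coder is then judged on whether it beats this ratio while keeping the reconstruction quality acceptable.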

    Applying wavelets for the controlled compression of communication network measurements

    Monitoring and measuring various metrics of high-speed networks produces a vast amount of information over a long period of time, making the storage of the metrics a serious issue. Previous work has suggested, among others, stream-aware compression algorithms, i.e. methodologies that try to organise the network packets in a compact way so that they occupy less storage. However, these methods do not reduce the redundancy in the stream information. Lossy compression becomes an attractive solution, as higher compression ratios can be achieved. However, the important and significant elements of the original data need to be preserved. This work proposes the use of a lossy wavelet compression mechanism that preserves crucial statistical and visual characteristics of the examined computer network measurements and provides significant compression against the original file sizes. To the best of our knowledge, the authors are the first to suggest and implement a wavelet analysis technique for compressing computer network measurements. In this paper, wavelet analysis is used and compared against the Gzip and Bzip2 tools for data rate and delay measurements. In addition, this paper provides a comparison of eight different wavelets with respect to the compression ratio and the preservation of the scaling behaviour, the long-range dependence, the mean and standard deviation, and the general reconstruction quality. The results show that the Haar wavelet provides higher peak signal-to-noise ratio (PSNR) values and better overall results than other wavelets with more vanishing moments. Our proposed methodology has been implemented on an on-line measurement platform and used to compress data traffic generated from a live network.
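    Checking the preservation of mean and standard deviation is straightforward once the lossy step is in place; below is a minimal sketch with a one-level Haar threshold coder (the signal and threshold are invented). Note that Haar pairwise averaging preserves the mean exactly, so only the standard deviation is perturbed by thresholding.

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def haar_compress(signal, threshold):
    """One-level Haar transform, zero small details, reconstruct."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

signal = [10.0, 10.2, 10.1, 9.9, 15.0, 15.4, 10.0, 10.2]
recon = haar_compress(signal, threshold=0.15)
mean_err = abs(mean(signal) - mean(recon))
std_err = abs(std(signal) - std(recon))
```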

    Effect of discrete cosine and wavelet transformation based compression on the long range dependence of communication network performance measurements

    This paper examines the impact of compression methods on the long-range dependence of communication network traffic measurements. The two compression methods examined are based on the Wavelet transformation and the Discrete Cosine Transformation (DCT). In order to measure the degree of long-range dependence of a stochastic process, we first have to estimate the Hurst parameter. The Hurst parameter is estimated by using the rescaled range statistic (R/S) method. The Hurst values of the examined signal, before and after the applied compression, are estimated and compared. If the Hurst value of the compressed signal is close to the Hurst value of the uncompressed signal, then the compression algorithm has little effect on the long-range dependence. The results show that the Wavelet transformation performs better than the DCT.
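    The R/S method estimates H as the slope of log(R/S) against log(n) over a range of window sizes n. A minimal sketch (window sizes and the white-noise test signal are illustrative choices, not the paper's data):

```python
import math
import random

def rescaled_range(series):
    """R/S statistic for one window: range of the cumulative deviations
    from the window mean, divided by the window standard deviation."""
    n = len(series)
    m = sum(series) / n
    devs = [x - m for x in series]
    cum, total = [], 0.0
    for d in devs:
        total += d
        cum.append(total)
    r = max(cum) - min(cum)
    s = math.sqrt(sum(d * d for d in devs) / n)
    return r / s

def hurst_rs(series, window_sizes):
    """Estimate H as the least-squares slope of log(mean R/S) vs log(n)."""
    xs, ys = [], []
    for n in window_sizes:
        rs_vals = [rescaled_range(series[i:i + n])
                   for i in range(0, len(series) - n + 1, n)]
        xs.append(math.log(n))
        ys.append(math.log(sum(rs_vals) / len(rs_vals)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1024)]  # white noise: H near 0.5
h = hurst_rs(noise, [16, 32, 64, 128, 256])
```

    Running the same estimator on the signal before and after lossy compression, and comparing the two H values, is the comparison the paper performs.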

    A framework for cross-layer measurements in wireless networks

    This paper formulates a framework for wireless network performance measurements with the scope of being as generic as possible. The methodology utilises a cross-layer approach in order to address the limitations of traditional layered techniques. A lot of work in the research community uses the channel power (Cp) to predict performance metrics in higher layers. There are currently two methods to measure Cp: either by using a spectrum analyser or from WiFi card information (RSSI). The paper discusses the correct configuration of a spectrum analyser (SA) to measure Cp. This paper also provides a comparison of both SA and RSSI results produced inside an anechoic chamber for three different applications. The behaviour of the RSSI values showed significant discrepancy with both the SA results and what was intuitively expected. The results pinpoint the necessity of a cross-layer approach and the importance of carefully selected and positioned equipment for the accuracy of the measurements.

    A framework for cross-layer measurement of 3G and Wi-Fi combined networks

    3G networks and Wi-Fi networks could complement each other, as each has different advantages in coverage and access capacity. A combined 3G and Wi-Fi network is one part of a heterogeneous IP network which has ubiquitous access capacity. However, the characteristics of the lower layers in the wireless portion of such a heterogeneous IP network can significantly affect the performance of the higher layers and, further, the overall performance of the whole network. A single-layer approach to performance analysis cannot provide enough information to capture the correlation between lower and higher layers. A cross-layer measurement approach for combined 3G and Wi-Fi networks is presented which aims to correlate the characteristics of the physical layer (e.g. channel power and signal-to-interference ratio) with key parameters of the higher layers (e.g. packet-loss ratio and round-trip time).
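    In the simplest case, correlating a physical-layer metric with a higher-layer one reduces to a correlation coefficient over time-aligned samples. The sketch below uses Pearson correlation on synthetic data (both the numbers and the choice of Pearson over other measures are illustrative assumptions).

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Synthetic time-aligned samples: as the signal-to-interference ratio (dB)
# drops, the packet-loss ratio rises.
sir_db = [25, 24, 22, 18, 15, 12, 10, 8]
loss_ratio = [0.00, 0.00, 0.01, 0.02, 0.05, 0.09, 0.15, 0.22]
r = pearson(sir_db, loss_ratio)  # strongly negative
```

    A strongly negative r is what a cross-layer framework would surface here: worsening physical-layer conditions predict higher-layer packet loss.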

    Contemporary sequential network attacks prediction using hidden Markov model

    Intrusion prediction is a key task for forecasting network intrusions. Intrusion detection systems have been primarily deployed as a first line of defence in a network; however, they often suffer from practical testing and evaluation due to the unavailability of rich datasets. This paper evaluates the detection accuracy of determining all states (AS), the current state (CS), and the prediction of the next state (NS) of an observation sequence, using the two conventional Hidden Markov Model (HMM) training algorithms, namely Baum-Welch (BW) and Viterbi Training (VT). Both BW and VT were initialised using uniform, random and count-based parameters, and the experimental evaluation was conducted on the CSE-CICIDS2018 dataset. Results show that the BW and VT count-based initialisation techniques perform better than uniform and random initialisation when detecting AS and CS. In contrast, for NS prediction, uniform and random initialisation techniques perform better than the BW and VT count-based approaches.
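    Determining the state sequence underlying an observation sequence is the Viterbi decoding step of an HMM. Below is a minimal sketch with a toy two-state model; the state names and probabilities are invented for illustration and are not taken from CSE-CICIDS2018.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence
    (dynamic programming with backpointers)."""
    v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        v.append({})
        back.append({})
        for s in states:
            best_prev = max(states, key=lambda p: v[t - 1][p] * trans_p[p][s])
            v[t][s] = v[t - 1][best_prev] * trans_p[best_prev][s] * emit_p[s][obs[t]]
            back[t][s] = best_prev
    last = max(states, key=lambda s: v[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical two-state intrusion model.
states = ["benign", "attack"]
start_p = {"benign": 0.8, "attack": 0.2}
trans_p = {"benign": {"benign": 0.9, "attack": 0.1},
           "attack": {"benign": 0.3, "attack": 0.7}}
emit_p = {"benign": {"normal": 0.9, "alert": 0.1},
          "attack": {"normal": 0.2, "alert": 0.8}}

path = viterbi(["normal", "alert", "alert", "normal"], states, start_p, trans_p, emit_p)
```

    Decoding the whole sequence gives AS, its last element gives CS, and NS prediction would extend the path one step using the transition matrix alone.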